Crate async_openai
Async Rust library for the OpenAI REST API, based on its OpenAPI spec.
Creating client
use async_openai::{Client, config::OpenAIConfig};
// Create an OpenAI client with the API key read from the env var OPENAI_API_KEY and the default base URL.
let client = Client::new();
// The above is a shortcut for
let config = OpenAIConfig::default();
let client = Client::with_config(config);
// Or use an API key from a different source and a non-default organization
let api_key = "sk-..."; // This secret could come from a file or an environment variable.
let config = OpenAIConfig::new()
.with_api_key(api_key)
.with_org_id("the-continental");
let client = Client::with_config(config);
// Use a custom reqwest client
let http_client = reqwest::ClientBuilder::new().user_agent("async-openai").build().unwrap();
let client = Client::new().with_http_client(http_client);
Microsoft Azure Endpoints
use async_openai::{Client, config::AzureConfig};
let config = AzureConfig::new()
.with_api_base("https://my-resource-name.openai.azure.com")
.with_api_version("2023-03-15-preview")
.with_deployment_id("deployment-id")
.with_api_key("...");
let client = Client::with_config(config);
// Note that the Azure OpenAI service does not support every API; `async-openai`
// does not restrict this and still allows calls to all of the same APIs as for OpenAI.
Making requests
use async_openai::{Client, types::{CreateCompletionRequestArgs}};
// Create client
let client = Client::new();
// Create request using builder pattern
// Every request struct has a companion builder struct with the same name plus an `Args` suffix
let request = CreateCompletionRequestArgs::default()
.model("text-davinci-003")
.prompt("Tell me the recipe of alfredo pasta")
.max_tokens(40_u16)
.build()
.unwrap();
// Call API
let response = client
.completions() // Get the API "group" (completions, images, etc.) from the client
.create(request) // Make the API call in that "group"
.await
.unwrap();
println!("{}", response.choices.first().unwrap().text);
Examples
For full working examples of all supported features, see the examples directory in the repository.
Modules
- config: Client configurations, such as OpenAIConfig for OpenAI and AzureConfig for Azure OpenAI Service.
- error: Errors originating from API calls, parsing responses, and reading or writing to the file system; a minimal error-handling sketch follows this list.
- types: Types used in OpenAI API requests and responses. These types are created from component schemas in the OpenAPI spec.
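All of these failure modes are surfaced through a single error type, OpenAIError, so an entire request can flow through the ? operator. A minimal sketch, assuming (as in recent crate versions) that both the request builders and the API calls return OpenAIError; the helper function name is illustrative:
use async_openai::{Client, error::OpenAIError, types::CreateCompletionRequestArgs};
async fn complete(prompt: &str) -> Result<String, OpenAIError> {
    let client = Client::new();
    // Builder validation errors convert into OpenAIError via `?`
    let request = CreateCompletionRequestArgs::default()
        .model("text-davinci-003")
        .prompt(prompt)
        .max_tokens(40_u16)
        .build()?;
    // Network, API, and deserialization failures also surface as OpenAIError
    let response = client.completions().create(request).await?;
    Ok(response.choices.first().map(|c| c.text.clone()).unwrap_or_default())
}
Callers can then match on specific variants, such as OpenAIError::ApiError, to distinguish errors returned by the API itself from transport or parsing failures.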
Structs
- Turn audio into text. Related guide: Speech to text
- Given a chat conversation, the model will return a chat completion response; a minimal chat sketch follows this list.
- Client is a container for config, backoff and http_client used to make API calls.
- Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Given a prompt and an instruction, the model will return an edited version of the prompt.
- Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms; a minimal embeddings sketch follows this list.
- Files are used to upload documents that can be used with features like Fine-tuning.
- Manage fine-tuning jobs to tailor a model to your specific training data.
- Given a prompt and/or an input image, the model will generate a new image.
- List and describe the various models available in the API. You can refer to the Models documentation to understand what models are available and the differences between them.
- Given an input text, outputs if the model classifies it as violating OpenAI’s content policy.
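As an illustration of the chat completions group described above, here is a minimal sketch of a chat request; the exact message-building types and response field shapes vary slightly between crate versions, so treat this as indicative rather than definitive:
use async_openai::{Client, types::{ChatCompletionRequestMessageArgs, CreateChatCompletionRequestArgs, Role}};
let client = Client::new();
let request = CreateChatCompletionRequestArgs::default()
    .model("gpt-3.5-turbo")
    .messages([ChatCompletionRequestMessageArgs::default()
        .role(Role::User)
        .content("Explain the builder pattern in one sentence.")
        .build()
        .unwrap()])
    .build()
    .unwrap();
let response = client.chat().create(request).await.unwrap();
// `content` is a String in some versions and an Option<String> in others; Debug-print to stay version-agnostic
println!("{:?}", response.choices[0].message.content);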
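Similarly, a minimal embeddings sketch; the model name and response field names below follow the crate's published examples and may differ in other versions:
use async_openai::{Client, types::CreateEmbeddingRequestArgs};
let client = Client::new();
let request = CreateEmbeddingRequestArgs::default()
    .model("text-embedding-ada-002")
    .input("The food was delicious and the waiter was friendly.")
    .build()
    .unwrap();
let response = client.embeddings().create(request).await.unwrap();
// Each item in `data` carries the embedding vector as a Vec<f32>
println!("vector length: {}", response.data[0].embedding.len());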